Physics 215 (1st Semester AY 2022-2023)

Richelle Jade L. Tuquero

Session 1: HPC and the Julia framework

\textbf{OBJECTIVE}: Confirm Julia framework and Base speed

KR1

Use @code_* to examine a simple function. The * is replaceable by native, typed, warntype, and others. Discover them.

We want to define a function, pos, that determines the position of an object after some time $t$ moving at constant velocity. Before defining it, we check whether a function with the same name already exists.

Since there is no documentation for pos, we now define the function so that it solves the equation $x = x_0 + vt$.
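The notebook's code cell is not shown here, so the following is a minimal sketch of such a definition:

```julia
# x = x0 + v*t: position after time t at constant velocity v,
# starting from initial position x0.
pos(x0, v, t) = x0 + v * t
```

For example, pos(0, 2, 3) returns 6, and pos(1.0, 2, 3) returns 7.0.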

The @code_native

We will now start examining the function pos() using @code_native. To get a general idea of what @code_native is, we use the help mode (?).

The general idea we can infer from the definition is that @code_native prints the native assembly instructions that the machine executes for the function. We will now try different combinations of input types for pos() to get a better idea of what @code_native does.
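Assuming pos as defined earlier, the inspection might look like this (the exact assembly printed depends on the machine):

```julia
pos(x0, v, t) = x0 + v * t         # as defined earlier

@code_native pos(0, 2, 3)          # all Int64: integer mul/add instructions
@code_native pos(0.0, 2.0, 3.0)    # all Float64: fmul/fadd instructions
@code_native pos(0.0, 2, 3)        # mixed: expect a conversion, e.g. scvtf on ARM
```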

We confirm that @code_native presents the assembly, where the instructions and registers change depending on the types of the input variables of the function (all Int64 versus all Float64). Moreover, if the input types are mixed, there are additional instructions such as scvtf, with which Julia converts one input so that the fmul instruction can be used. This shows that @code_native displays the machine instructions and how Julia manages mixed input types: it converts the inputs to a common type before proceeding with the arithmetic.

The @code_warntype

Before we begin using @code_warntype, we must know what it does by accessing help ?.

From the given definition, we get the overall idea that @code_warntype displays the method body and warns us about possible type problems. To get a better idea of this, let us use the function we previously defined.
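Again assuming the pos defined earlier, the call is simply:

```julia
pos(x0, v, t) = x0 + v * t     # as defined earlier

# Prints the body annotated with inferred types; problematic
# (non-concrete) types would be highlighted in red.
@code_warntype pos(0.0, 2, 3)
```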

As shown, @code_warntype presents us with the arguments and the body of the function as executed by Julia. It appears as pseudo-code showing the steps being run by the program, annotated with the inferred types.

Notice that there are no possible sources of errors in the printed result. A reason for this is that even though the input types differ, there is only a single type in the body of the code. For the sake of discussion, let us define a function that returns the body mass index, or 0 if there is no mass. We will call it bmi, provided there is no existing function with the same name.
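One way to write such a deliberately type-unstable function (the exact body is an assumption consistent with the description):

```julia
# BMI m/h^2, or the integer 0 when there is no mass. The two branches
# return different types (Float64 vs Int64), so the inferred return
# type is Union{Float64, Int64}, which @code_warntype flags in red.
bmi(m, h) = m > 0 ? m / h^2 : 0

@code_warntype bmi(70.0, 1.8)
```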

We notice possible errors shown in red, caused by the different types that the variables in the body and the output can take. This can be fixed either by making sure the variable types are consistent, or by checking the type of a variable using typeof() or eltype(). Note that if we want to convert a variable to a specific type, we use the function convert(), which takes two inputs: the type and the variable.

We define a modified version of bmi(), which we will refer to as bmi_fix().
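A sketch of the fix, following the approach described above:

```julia
# Same computation, but both branches now return Float64,
# so the inferred return type is concrete.
bmi_fix(m, h) = m > 0 ? m / h^2 : 0.0
```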

As shown, there are no longer any possible errors marked in red, since the body now produces only one type.

The @code_typed

Let us determine the definition of @code_typed.

Based on its definition, @code_typed prints the type-inferred lowered form (IR), unlike @code_warntype, which prints ASTs. To better understand this, we apply it to the simple function created earlier.
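Assuming the pos defined earlier, the three cases discussed below look like:

```julia
pos(x0, v, t) = x0 + v * t      # as defined earlier

@code_typed pos(1, 2, 3)        # final annotation: ...::Int64
@code_typed pos(1.0, 2.0, 3.0)  # ...::Float64
@code_typed pos(1.0, 2, 3)      # mixed inputs are promoted: ...::Float64
```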

We notice that when the input variables are both integers (Int64 by default on my computer), the output type is also Int64. The same holds for Float64 inputs, which generate a Float64 output. This tells us that @code_typed matches the type of the output to the types of the inputs. However, if we use a combination of Float64 and Int64 inputs, the behavior follows the native assembly shown previously, which converts the inputs to a common type; hence the output for combinations of Float64 and Int64 is Float64.

The @code_lowered

Let us check what @code_lowered does.
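With the pos defined earlier, the comparison across input types is:

```julia
pos(x0, v, t) = x0 + v * t    # as defined earlier

@code_lowered pos(1, 2, 3)
@code_lowered pos(1.0, 2, 3)  # identical output: lowering ignores input types
```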

As mentioned in the description, @code_lowered is similar to @code_typed, but it returns the lowered form (IR) before type inference. We also notice how closely it resembles the written code of the function, and unlike @code_typed we do not see the types of the variables at each step. Moreover, there is no change in the output of @code_lowered for different combinations of input types. Therefore, it mainly reflects how the function is run rather than the types involved.

The @code_llvm

Lastly for KR1, we will now consider @code_llvm.
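Again assuming the pos defined earlier:

```julia
pos(x0, v, t) = x0 + v * t   # as defined earlier

# For mixed input types, look for an int-to-float conversion
# such as sitofp in the printed LLVM IR.
@code_llvm pos(1.0, 2, 3)
```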

Running @code_llvm shows the compiled LLVM bitcode for the given function. Unlike the rest of the @code_* macros, we find the printed result of @code_llvm hard to read. However, some instructions are similar to those shown by @code_native, such as converting the type of a variable when the input types differ. This tells us that the LLVM bitcode is close to an assembly language.

KR2

Demonstrate that Julia is able to determine constants in codes.

Let us define two simple functions: circum(), which contains a constant, and rec_area(), which does not.
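Possible definitions (the bodies are assumptions consistent with the names):

```julia
circum(r) = 2 * pi * r    # circumference: contains the constant pi
rec_area(l, w) = l * w    # rectangle area: no constants

@code_llvm circum(1.0)          # the folded constant 2*pi appears in the IR
@code_typed rec_area(2.0, 3.0)  # no constant in the typed IR
```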

To determine if Julia can determine the constants, we examine the printed outputs of @code_llvm and @code_typed for both functions.

We observe that Julia can indeed determine constants by the presence of pointers in @code_llvm and a constant input in %1 for @code_typed.

KR3

Demonstrate Julia’s type-inference and multiple dispatch.

To determine the type of a variable or value, we simply use typeof() or eltype(). As an example, we show:
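A few representative calls:

```julia
typeof(1)                # Int64 on a 64-bit machine
typeof(1.0)              # Float64
typeof(1 + 2im)          # Complex{Int64}
eltype([1.0, 2.0, 3.0])  # Float64: the element type of the vector
```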

Let us now try to demonstrate multiple dispatch. In our function, we will implement multiple methods, one for each type of the variable. But first, to avoid overloading an existing function, we check whether there is any documentation for a function named typecheck.

Since there are none, we can now define function typecheck().

We defined one function with 8 different methods, where typecheck(x) behaves differently depending on the type of x.
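A shortened sketch of such a definition (the notebook has eight methods; here are four, with hypothetical return strings):

```julia
typecheck(x::Int64)   = "Int64"
typecheck(x::Float64) = "Float64"
typecheck(x::String)  = "String"
typecheck(x::Complex) = "Complex"

typecheck(1)      # dispatches to the Int64 method
typecheck(1.0)    # dispatches to the Float64 method
```

methods(typecheck) lists all the methods Julia can dispatch to.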

As we can see, there is now a function named typecheck, which we previously defined. We now demonstrate the outputs of typecheck(x).

KR3

Show the difference, if any, between your own sum function my_sum(x::Vector) and Julia's built-in sum using @time. Use a for-loop for your customized sum function.

We will now use the @time macro to determine how long in seconds does it take to run the function and check the allocations.
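A sketch of the for-loop sum and the timing runs discussed below:

```julia
# For-loop sum, as required by the task.
function my_sum(x::Vector)
    s = zero(eltype(x))
    for xi in x
        s += xi
    end
    return s
end

v = rand(10^6)
@time my_sum(v)   # first call: includes compilation time and extra allocations
@time my_sum(v)   # second call: already compiled, down to 1 allocation
```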

We observe that the second run of my_sum() is significantly faster than the first. Moreover, the allocations also decrease, leaving only 1 allocation for the second run. This is because Julia compiles the function only the first time it is called, as noted by the compilation time reported in the first @time output.

For a better comparison, let us also check the existing Julia function sum and apply it to the same values.

We observe that the speeds of my_sum() and sum() differ, but the allocations after the first run are the same. To compare the results more precisely, we use the @elapsed macro, which returns only the time in seconds, without the other details such as allocations that @time prints.
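A sketch of the @elapsed comparison (warming up both functions first so compilation is excluded):

```julia
function my_sum(x::Vector)   # as defined earlier
    s = zero(eltype(x))
    for xi in x
        s += xi
    end
    return s
end

v = rand(10^6)
my_sum(v); sum(v)            # warm-up calls: trigger compilation

t_mine = @elapsed my_sum(v)  # seconds only, no allocation report
t_base = @elapsed sum(v)
```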

We observe that while the existing function sum takes longer to compile, it is generally faster than the defined function my_sum() after compilation.

KR4

Replicate plotting the Mandelbrot set. Use a separate file Mandelbrot.jl to contain the function code. Use the include() function to load the file.

To replicate the Mandelbrot set, we first import Mandelbrot.jl, which contains the function mandelbrot. This function implements the Mandelbrot iteration given by $z_{n+1} = z_n^2 + c$.
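A typical escape-time implementation of this iteration (a sketch; the actual Mandelbrot.jl may differ in details such as the escape radius or return value):

```julia
# Iterate z -> z^2 + c starting from z = 0; return the number of
# iterations until |z| exceeds 2, or maxiter if it never escapes.
function mandelbrot(c, maxiter)
    z = zero(c)
    for n in 1:maxiter
        z = z^2 + c
        abs2(z) > 4 && return n   # abs2 avoids the square root
    end
    return maxiter
end
```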

Now that we have the mandelbrot function, we create a function, plotMandelbrot(), which we will call to easily plot the Mandelbrot set.
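A sketch of the grid part of such a function; the actual plotMandelbrot() presumably also passes the matrix to a plotting call such as Plots.heatmap, and the grid bounds used here are an assumption:

```julia
function mandelbrot(c, maxiter)   # escape-time sketch, as above
    z = zero(c)
    for n in 1:maxiter
        z = z^2 + c
        abs2(z) > 4 && return n
    end
    return maxiter
end

# Evaluate mandelbrot() on an n x n grid over the region of the
# complex plane containing the set.
function plotMandelbrot(n, maxiter)
    xs = range(-2.0, 1.0; length = n)
    ys = range(-1.5, 1.5; length = n)
    M = [mandelbrot(complex(x, y), maxiter) for y in ys, x in xs]
    # Plots.heatmap(xs, ys, M) would then draw the picture.
    return M
end
```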

We now plot the Mandelbrot set using mandelbrot() and plotMandelbrot() for different grid sizes and iteration counts.

We observe that to resolve the details of the Mandelbrot set, the number of iterations must be higher than 100; among the tested values, 500 iterations and above gave the best detail. Moreover, for a sharper image of our Mandelbrot set, we need a larger number of grid points, i.e., shorter intervals.

KR5

Plot the time it takes for the function to run, using the @time macro, for the given grid size n.

We define functions that return the time it takes to run plotMandelbrot() for different grid sizes n and iteration counts m. We will refer to them as tmandelbrot_grid() and tmandelbrot_iter(), respectively.
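A sketch of these timing helpers, reusing the sketches above (the bodies are assumptions consistent with the names):

```julia
function mandelbrot(c, maxiter)   # as above
    z = zero(c)
    for n in 1:maxiter
        z = z^2 + c
        abs2(z) > 4 && return n
    end
    return maxiter
end

plotMandelbrot(n, maxiter) =
    [mandelbrot(complex(x, y), maxiter)
     for y in range(-1.5, 1.5; length = n), x in range(-2.0, 1.0; length = n)]

# @elapsed returns only the seconds, so the results are easy to plot.
tmandelbrot_grid(ns, m) = [(@elapsed plotMandelbrot(n, m)) for n in ns]
tmandelbrot_iter(n, ms) = [(@elapsed plotMandelbrot(n, m)) for m in ms]
```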

KR6

Discuss the computational complexity of the Mandelbrot function you made based on KR5. What is the best @time output to use for this?

To discuss the computational complexity of the Mandelbrot function, we use CurveFit to determine its behavior.

To store the timings, we use the @elapsed macro, which gives only the time component of @time, for every value of the grid size n and iteration count m. Afterwards, to better describe the behavior for varying n and m, we fit curves to the data. Based on the resulting fits for the time to run plotMandelbrot() (which calls mandelbrot()), the time complexity in the grid size n is quadratic, as shown by the quadratic fit. Moreover, increasing the number of iterations m is directly proportional to the time, resulting in a linear fit. Thus, the time complexity is $\mathcal{O}(n^2)$ in n and $\mathcal{O}(m)$ in m.
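The notebook uses CurveFit for the fit; the same quadratic fit can be reproduced without any package via a plain least-squares solve. The timing data below are stand-in values, not measurements:

```julia
# Fit t ≈ a + b*n + c*n^2 by least squares using the backslash operator.
ns = [50.0, 100.0, 150.0, 200.0]
ts = 1e-6 .* ns .^ 2                  # stand-in data, exactly quadratic in n
A  = [ones(length(ns)) ns ns .^ 2]    # design matrix [1  n  n^2]
a, b, c = A \ ts                      # c dominates -> O(n^2) behavior
```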